ESG Conference 2025: a conversation with Steven Richman
Steven Richman is Chair of the IBA Bar Issues Commission, and represents the Bar Issues Commission on the IBA AI Taskforce. Steven is also a member of Clark Hill PLC, based in Princeton and New York, and has extensive experience in national and international litigation across broad areas of commercial and business practice, as well as arbitration and appellate practice. Steven has written and spoken extensively on the areas of international law, business and human rights, contracts, litigation and arbitration, and professional responsibility. Steven is a past chair of the American Bar Association’s International Law Section, serves in the ABA House of Delegates, and is the ABA Representative to the United Nations.
Sara Carnegie (SC): Steven Richman is Chair of the IBA Bar Issues Commission, or the BIC as it's known for short. He is also a member of Clark Hill PLC, based in Princeton and New York, and has extensive experience in national and international litigation across broad areas of commercial and business practice, as well as arbitration and appellate practice. Steven has written and spoken extensively on the areas of international law, business and human rights, contracts, litigation and arbitration, as well as professional responsibility. He has led the BIC work on artificial intelligence over the last three years. Steven is also Past Chair of the American Bar Association's International Law Section. He serves in the ABA House of Delegates and is also the ABA Representative to the United Nations.
Steven, thank you very much for joining us. It's great to have you, and it was really interesting to hear about the interface between artificial intelligence and ESG. You outlined how AI can both support and threaten human rights. It would be good if you could expand on what you think is the single most important step companies can take to ensure these tools respect rather than undermine those rights.
Steven Richman (SR): The most important thing that needs to be done is education. AI brings an extraordinary amount of opportunity and facility. It can cut research time. It can streamline the process, whether it's legal research or other research, but you have to approach it with caution. AI itself, if you ask ChatGPT, will outline its own limitations. It will admit mistakes are made. […] Also important is the database upon which it's drawing. We need not get into a discussion as to how AI thinks, or whether it thinks, and compare it to us. The important point to note is that it assembles a variety of resources in a comprehensive format, so that it facilitates, in a way, the outline or the approach that you can take. The key thing is to look at the sources and always check them, but it does help give you an outline for your process. In terms of the advantages, it can help level the playing field. It provides access to information across the spectrum. On the other hand, the risk from a human rights or ESG standpoint is that if the database is skewed, the result will be skewed. That can affect discrimination issues. It can also pose a risk to privacy in terms of what it's drawing on. And there's also a concern about the ability of people to suppress speech in using AI. But the flip side of all that is the advantages, where you can breeze through the airport on face recognition, for example, things like that.
Emily Morison (EM): Steven, you touched on the interface between AI and environmental considerations, and we're conscious that AI has a massive energy footprint. We've seen a large number of tech companies needing to invest significantly in building out their energy capacity so that they can operate these data centres. What do you think the IBA's role can be in helping to educate the legal profession about the environmental impacts that might be associated with using AI in the course of legal practice?
SR: The IBA has made focusing on artificial intelligence a priority. As you know, we issued our report last fall. My division, the Bar Issues Commission, focused on legal ethics issues and guidance for lawyers, from a professional discipline standpoint, on how they approach AI. But other divisions are focused on regulatory issues and so forth. So I think the IBA plays an important role in educating people on the climate issues and producing content that provides guidelines and checklists, if you will, for how bars can approach the issue. The more people know, the more you start to change the paradigm. The more people understand that this is not a free resource, that there is accountability and that there are costs and damage when you add it up in the aggregate, the more you can develop policies to control the use.
SC: Our last question today, Steven, concerns the role of legal advisers, which is inevitably impacted by the evolution and development of artificial intelligence. How do you think that will change in the next five years as AI tools become more embedded in decision-making and law firm management?
SR: There are two ethical rules that really dominate the answer to that question. One is competence, and jurisdictions are now starting to incorporate a working knowledge of artificial intelligence into the definition of competence, or at least into the commentary on what it means to be competent. The rules and the commentary don't require each lawyer to be an expert or a technician; they do require the lawyer to understand, at a minimum, the risks and benefits of AI that are necessary to the representation.
The other one that jumps out is, in various jurisdictions, the role of lawyer as trusted counsel, or in the American Bar Association, Ethics Rule 2.1, which talks about the role of the lawyer as advisor. This really draws on the obligation of the lawyer to put legal advice in context. To the extent we're talking about climate change or ESG issues, the activity of a particular client may not be illegal in a particular case, but the lawyer would have an obligation, in my view, to advise the client as to the context of the legal advice. So going back to the example we just gave as to climate change and impacts, I'm not saying the lawyer has to say to the client, yes, you can hold that conference with your employees, but understand that you're going to do this much damage to the environment by using artificial intelligence. You don't necessarily have to put it that way. But if the client is asking what the impact on its image would be, then you certainly have the scope to advise on that. And even if the client doesn't ask it that way, if it's a significant enough issue in your mind to help put your legal advice in context, then you can do it. The example I gave may not be the most striking one, but it really comes down to saying: yes, you can build that factory in that jurisdiction, these are the permits you need and it would be lawful to do so, but you should be aware that there's a certain view in that jurisdiction, based on certain activities, that it may be harmful, or may be deemed discriminatory, or may be deemed to have some other impact on social issues. There it would be within your remit, in fulfilling your role as an advisor, to put the hard law advice in context, even if you're dealing with soft law.